A Appendix
X belongs to A when X is sampled from distribution ν. We use six Atari games: AirRaid, Asteroids, Pong, MsPacman, Gopher, and UpNDown. The reward functions for these environments are set as follows. The environment settings for the maze environments are shown in Table 1. For the "maze-multireward" environment, the orange square awards the agent for

Environment setting            Value for Maze   Value for Atari
Stack size                     1                4
Frame skip                     1                4
One-frame observation shape    (84, 84, 3)      (84, 84)
Agent's observation shape      (84, 84, 3)      (84, 84, 4)
γ                              0.99             0.99
Reward clipping                --               true
Terminate on Life Loss         --               true
Sticky Actions                 false            false

Table 1: Environment settings for maze and Atari.
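The Table 1 settings can be captured as plain-Python configuration dictionaries; a minimal sketch, with key names of my own choosing (they are not from the paper):

```python
# Table 1 expressed as config dictionaries. Key names are illustrative
# assumptions; the values are taken directly from the table. "--" entries
# (settings not applicable to the maze environments) are encoded as None.
MAZE_SETTINGS = {
    "stack_size": 1,
    "frame_skip": 1,
    "one_frame_observation_shape": (84, 84, 3),
    "agent_observation_shape": (84, 84, 3),
    "gamma": 0.99,
    "reward_clipping": None,        # not applicable for maze
    "terminate_on_life_loss": None, # not applicable for maze
    "sticky_actions": False,
}

ATARI_SETTINGS = {
    "stack_size": 4,
    "frame_skip": 4,
    "one_frame_observation_shape": (84, 84),
    "agent_observation_shape": (84, 84, 4),  # 4 stacked grayscale frames
    "gamma": 0.99,
    "reward_clipping": True,
    "terminate_on_life_loss": True,
    "sticky_actions": False,
}
```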
Explainable Deterministic MDPs
We present a method for a certain class of Markov Decision Processes (MDPs) that can relate the optimal policy back to one or more reward sources in the environment. For a given initial state, without fully computing the value function, the q-value function, or the optimal policy, the algorithm can determine which rewards will and will not be collected, whether a given reward will be collected only once or continuously, and which local maximum within the value function the initial state will ultimately lead to. We demonstrate that the method can map the state space, identifying regions that are dominated by a single reward source, and can fully analyze the state space to explain all actions. We provide a mathematical framework to show how all of this is possible without first computing the optimal policy or value function.
Memoryless Exact Solutions for Deterministic MDPs with Sparse Rewards
We propose an algorithm for deterministic continuous Markov Decision Processes with sparse rewards that computes the optimal policy exactly, with no dependency on the size of the state space. The algorithm has time complexity $O( |R|^3 \times |A|^2 )$ and memory complexity $O( |R| \times |A| )$, where $|R|$ is the number of reward sources and $|A|$ is the number of actions. Furthermore, we describe a companion algorithm that can follow the optimal policy from any initial state without computing the entire value function, instead computing the value of states on demand as they are needed. The algorithm that solves the MDP depends on the size of the state space for neither time nor memory complexity, and following the optimal policy from an initial state is linear in time and space in the length of the resulting path. We demonstrate the algorithm's operation side by side with value iteration on tractable MDPs.
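The abstract compares the proposed method against value iteration on tractable MDPs; since the method itself is not specified in the abstract, the sketch below shows only that value-iteration baseline on a tiny deterministic MDP with a single sparse reward. All names and the specific toy MDP are my own illustrative choices.

```python
# Value iteration (the comparison baseline, not the paper's algorithm) on a
# deterministic 1-D grid of 5 states with one sparse reward at state 4.
# Reward is paid on entering a state; state 4 is absorbing via the +1 move,
# so its optimal value converges to 1 / (1 - gamma) = 100.
GAMMA = 0.99
N_STATES = 5                 # states 0..4 on a line
REWARD = {4: 1.0}            # sparse reward: only state 4 pays out
ACTIONS = (-1, +1)           # deterministic moves left/right

def step(s, a):
    """Deterministic transition, clipped at the grid boundaries."""
    return min(max(s + a, 0), N_STATES - 1)

def value_iteration(tol=1e-9):
    V = [0.0] * N_STATES
    while True:
        delta = 0.0
        for s in range(N_STATES):
            best = max(REWARD.get(step(s, a), 0.0) + GAMMA * V[step(s, a)]
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
```

Note that even this tiny example sweeps all of `V` on every iteration; the point of the paper's approach is to avoid exactly that dependence on the state-space size.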
Fast Online Exact Solutions for Deterministic MDPs with Sparse Rewards
Bertram, Joshua R., Yang, Xuxi, Wei, Peng
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision making under uncertainty. The classical approaches for solving MDPs are well known and have been widely studied, some relying on approximation techniques to handle MDPs with large state and/or action spaces. However, most of these classical solution approaches and their approximation techniques still require substantial computation time to converge, and usually must be re-computed if the reward function is changed. This paper introduces a novel alternative approach for exactly and efficiently solving deterministic, continuous MDPs with sparse reward sources. When the environment is such that the "distance" between states can be determined in constant time, e.g. a grid world, our algorithm runs in $O( |R|^2 \times |A|^2 \times |S| )$ time, where $|R|$ is the number of reward sources, $|A|$ is the number of actions, and $|S|$ is the number of states. Memory complexity for the algorithm is $O( |S| + |R| \times |A| )$. This approach opens new avenues for boosting computational performance for certain classes of MDPs and is of tremendous value for MDP applications such as robotics and unmanned systems. This paper describes the algorithm, presents numerical experiment results to demonstrate its computational performance, and provides a rigorous mathematical description of the approach.
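The constant-time "distance" assumption can be made concrete with a small sketch: in a grid world with 8-connected unit moves, the number of steps from a state to a reward source is the Chebyshev distance, computable in O(1), so the discounted payoff γ^d · r of each reward source can be scored per state in O(|R|) with no sweep over |S|. The function names and the single-reward scoring rule below are my own illustrative assumptions, not the paper's algorithm.

```python
# Scoring a state by its best single discounted reward, using an O(1)
# distance metric. Illustrative sketch only.
GAMMA = 0.99

def chebyshev(s, t):
    """Steps to reach t from s with 8-connected unit moves: O(1)."""
    return max(abs(s[0] - t[0]), abs(s[1] - t[1]))

def score(state, reward_sources):
    """Best discounted value over reward sources: O(|R|) per state."""
    return max(GAMMA ** chebyshev(state, pos) * r
               for pos, r in reward_sources)

rewards = [((9, 9), 1.0), ((0, 5), 0.5)]
print(score((0, 0), rewards))   # the 1.0 reward at distance 9 dominates
```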